Model Report for Baseball Players

Generated on 15 Apr 2025, 00:02   ●   19,878 original samples, 19,878 synthetic samples

Accuracy               94.7%    (98.3%)
  Univariate           96.0%    (99.0%)
  Bivariate            93.5%    (97.6%)

Similarity
  Cosine Similarity    0.99962  (0.99990)
  Discriminator AUC    58.0%    (49.7%)

Distances
  Identical Matches    0.1%     (0.0%)
  Average Distances    0.323    (0.325)

Correlations

Univariate Distributions

Bivariate Distributions

Bivariate Distributions for context

Accuracy

Column      Univariate   Bivariate
throws      97.1%        95.0%
bats        96.7%        94.4%
birthDate   96.5%        92.3%
deathDate   96.4%        92.4%
weight      94.8%        92.5%
height      94.2%        92.4%
Total       96.0%        93.5%

Explainer
Accuracy of the synthetic data is assessed by comparing the distributions of the synthetic data (shown in green) with those of the original data (shown in gray). For each distribution plot we sum the deviations across all categories to obtain the total variation distance (TVD). The accuracy of that distribution is then reported as 100% - TVD. These accuracies are calculated for all univariate and bivariate distributions, and the final accuracy score is the average across all of them.
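The per-distribution accuracy described above can be sketched in a few lines. This is a minimal illustration, not the report's exact implementation: the function name is mine, and it uses the standard TVD definition (half the summed absolute frequency deviations) over one categorical column.

```python
import numpy as np
import pandas as pd

def univariate_accuracy(original: pd.Series, synthetic: pd.Series) -> float:
    """Accuracy of one distribution, reported as 1 - total variation distance."""
    # relative frequency of each category in both datasets
    p = original.value_counts(normalize=True)
    q = synthetic.value_counts(normalize=True)
    # align on the union of categories, counting absent categories as 0
    p, q = p.align(q, fill_value=0.0)
    tvd = 0.5 * np.abs(p - q).sum()
    return 1.0 - tvd

orig = pd.Series(["R", "R", "L", "R"])
synth = pd.Series(["R", "L", "L", "R"])
acc = univariate_accuracy(orig, synth)  # TVD = 0.25, so accuracy = 0.75
```

The bivariate accuracies work the same way, except that the frequencies are computed over pairs of columns (e.g. via a cross-tabulation) before taking the TVD.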

Similarity


Explainer
These plots show the first three principal components of the training samples, synthetic samples, and (if available) holdout samples within the embedding space. The black dots mark the centroids of the respective sample sets. The similarity metric measures the cosine similarity between these centroids. We expect it to be close to 1, indicating that the synthetic samples are as similar to the training samples as the holdout samples are.
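The centroid comparison above can be sketched as follows. This is a simplified illustration under my own assumptions: the report's actual embedding of the records is not specified here, so the example just uses generic point clouds in a 3-dimensional space and compares the means.

```python
import numpy as np

def centroid_cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between the centroids of two sample sets."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    return float(np.dot(ca, cb) / (np.linalg.norm(ca) * np.linalg.norm(cb)))

rng = np.random.default_rng(0)
# two clouds drawn from the same distribution: centroids nearly coincide
training = rng.normal(loc=1.0, size=(1000, 3))
synthetic = rng.normal(loc=1.0, size=(1000, 3))
sim = centroid_cosine_similarity(training, synthetic)  # close to 1
```

A value near 1 only says the centroids point in the same direction; that is why the report also relies on the discriminator AUC and the distance metrics below, which are sensitive to differences a centroid comparison would miss.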

Distances

                     Synthetic vs. Training   Synthetic vs. Holdout   Training vs. Holdout
Identical Matches    0.1%                     0.0%                    0.6%
Average Distances    0.323                    0.325                   0.317
DCR Share            49.9%


Explainer
Synthetic data should be as close to the original training samples as it is to the original holdout samples, which serve as a reference. This is asserted empirically by measuring the distance from each synthetic sample to its closest original sample, with the training and holdout sets subsampled to equal size. A green line significantly to the left of the dark gray line implies that synthetic samples are closer to the training samples than to the holdout samples, indicating that the model has overfitted to the training data. A green line that overlays the dark gray line validates that the trained model represents general rules that hold in holdout samples just as well as in training samples. The DCR share is the proportion of synthetic samples that are closer to a training sample than to a holdout sample; ideally this value should not significantly exceed 50%, as a higher value could indicate overfitting.
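The DCR share can be sketched as follows. This is a minimal illustration, not the report's exact implementation: the function name is mine, it uses plain Euclidean distance on already-numeric rows, and it splits exact ties evenly between training and holdout, all of which are assumptions.

```python
import numpy as np

def dcr_share(synthetic: np.ndarray, training: np.ndarray,
              holdout: np.ndarray) -> float:
    """Share of synthetic samples closer to training than to holdout."""
    def min_dist(x: np.ndarray, ref: np.ndarray) -> np.ndarray:
        # distance from each synthetic row to its nearest reference row
        return np.linalg.norm(x[:, None, :] - ref[None, :, :], axis=2).min(axis=1)

    d_train = min_dist(synthetic, training)
    d_hold = min_dist(synthetic, holdout)
    closer = (d_train < d_hold).mean()
    ties = (d_train == d_hold).mean()
    return float(closer + 0.5 * ties)  # ties split evenly

synthetic = np.array([[0.0], [1.0]])
training = np.array([[0.1]])
holdout = np.array([[0.9]])
share = dcr_share(synthetic, training, holdout)  # one of two samples -> 0.5
```

The brute-force pairwise distance computation is O(n*m) in memory; for datasets the size of this report's (~20k rows) a nearest-neighbor index would be the practical choice.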